A Proofs

Neural Information Processing Systems

Lemma 1. Assume that Assumptions 1 and 2 hold. Then the iterates satisfy the following inequality for all k ∈ ℕ:

Proof. Combining Assumption 2 with Definition 4.6 bounds the second moment of the stochastic gradient g(W). Summing both sides of this inequality over k ∈ {1, ..., K} and recalling Assumption 2(a), then rearranging and dividing by K, yields the result. The second condition in Eq. 4.10 ensures that the limit exists. The sufficient direction condition in Assumption 2(b) guarantees that the model moves in a descent direction of the loss function.

Following the experimental setup in Section 5.1, we demonstrate that the proposed method empirically satisfies Assumption 2(b): Figure 7 visualizes the sufficient direction constant µ for the (partial) convolutional layers of the four models during end-to-end training with TREC. For SqueezeNet and ResNet-34 we show one block as the representative, since the other blocks behave similarly. Several insights can be drawn from Figure 7. (i) The value of µ for each convolutional layer is consistently greater than zero, indicating that Assumption 2(b) is satisfied, which in turn ensures the convergence of the TREC-equipped CNNs.
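The per-layer constant µ plotted in Figure 7 can be estimated as the ratio between the inner product of the approximate gradient with the true gradient and the squared norm of the true gradient, so that µ > 0 certifies a descent direction. A minimal sketch of that measurement is below; the function name, the layer shape, and the noisy-gradient stand-in for TREC's approximation are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sufficient_direction_constant(approx_grad, true_grad):
    """Estimate mu = <g, grad F> / ||grad F||^2 for one layer.

    mu > 0 means the approximate gradient g still points in a
    descent direction of the loss (Assumption 2(b)-style check).
    """
    g = approx_grad.ravel()
    d = true_grad.ravel()
    denom = float(np.dot(d, d))
    if denom == 0.0:
        return 0.0  # zero true gradient: constant undefined, report 0
    return float(np.dot(g, d) / denom)

# Illustrative check: a noisy gradient that still correlates with the
# true one (a stand-in for the approximation error, not TREC itself).
rng = np.random.default_rng(0)
true_grad = rng.normal(size=(64, 3, 3, 3))   # hypothetical conv-layer gradient
approx_grad = true_grad + 0.5 * rng.normal(size=true_grad.shape)
mu = sufficient_direction_constant(approx_grad, true_grad)
print(mu)
```

In practice one would log this ratio per layer and per iteration during training, which is what a plot like Figure 7 summarizes.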





Smooth convex optimization problems over polytopes are an important class of problems that appear in many settings, such as low-rank matrix completion [1], structured supervised learning [2, 3], electrical flows over graphs [4], video co-localization in computer vision [5], traffic assignment problems [6], and submodular function minimization [7].